

PLANS: Neuro-Symbolic Program Learning from Videos

Neural Information Processing Systems

Recent years have seen the rise of statistical program learning based on neural models as an alternative to traditional rule-based systems for programming by example. Rule-based approaches offer correctness guarantees in an unsupervised way as they inherently capture logical rules, while neural models scale more realistically to raw, high-dimensional input and provide resistance to noisy I/O specifications. We introduce PLANS (Program LeArning from Neurally inferred Specifications), a hybrid model for program synthesis from visual observations that gets the best of both worlds, relying on (i) a neural architecture trained to extract abstract, high-level information from each raw, individual input, and (ii) a rule-based system that uses the extracted information as I/O specifications to synthesize a program capturing the different observations. In order to address the key challenge of making PLANS resistant to noise in the network's output, we introduce a dynamic filtering algorithm for I/O specifications based on selective classification techniques. We obtain state-of-the-art performance at program synthesis from diverse demonstration videos in the Karel and ViZDoom environments, while requiring no ground-truth program for training.
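The dynamic filtering idea in the abstract — discarding neurally inferred I/O specifications whose confidence is too low before handing them to the symbolic synthesizer — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the threshold schedule, and the retry loop over progressively looser thresholds are all hypothetical choices made here for clarity.

```python
def filter_specs(specs, confidences, threshold):
    """Keep only the I/O examples whose neural confidence meets the threshold.

    `specs` are the abstract I/O specifications extracted by the network,
    `confidences` the network's associated confidence scores (e.g. softmax
    probabilities, as commonly used in selective classification).
    """
    return [s for s, c in zip(specs, confidences) if c >= threshold]

def dynamic_filter(specs, confidences, synthesize,
                   thresholds=(0.99, 0.9, 0.5, 0.0)):
    """Try increasingly permissive confidence thresholds until the symbolic
    synthesizer finds a program consistent with the retained specifications.

    `synthesize` stands in for the rule-based solver: it returns a program
    or None when no program satisfies the given specifications.
    """
    for t in thresholds:
        kept = filter_specs(specs, confidences, t)
        program = synthesize(kept)
        if program is not None:
            return program, kept
    return None, []
```

A toy usage: with one low-confidence, noisy specification among the inputs, the loop first rejects it at a strict threshold and the synthesizer succeeds on the clean subset; only if synthesis keeps failing are noisier specifications readmitted.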


Review for NeurIPS paper: PLANS: Neuro-Symbolic Program Learning from Videos

Neural Information Processing Systems

Relation to Prior Work: The relation to Ellis 2018 (which the authors discuss) should be reframed. That work also learns to infer specifications from noisy perceptual input, which are then fed to a downstream symbolic solver, and it likewise addresses the challenge of uncertainty over specifications, albeit in a Bayesian way rather than via the heuristics proposed here. Could you similarly situate your system in a probabilistic framework, and resolve the ambiguity over specs in a less heuristic manner? Would that fare better or worse on your data sets? I feel this is the main substantive difference, rather than the details that are presently emphasized in the text.
